    Best Approximation With Geometric Constraints

    This is a study of best approximation with certain geometric constraints. Two major problem areas are considered: best Lp approximation to a function in Lp(0,1) by convex functions, (m, n)-convex functions, and (m, n)-convex splines, for 1 < p < ∞; and best uniform approximation to a continuous function by convex functions, quasi-convex functions, and piecewise monotone functions.
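
    The convexity constraint becomes concrete in a finite-dimensional setting. The sketch below (an illustration under assumptions, not the paper's construction) computes a discrete least-squares approximation by a convex piecewise-linear function: an affine part plus nonnegatively weighted hinge functions max(t - t_j, 0), where the nonnegative weights enforce convexity. The target f, grid, and knots are all fabricated for the example.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Discrete L2 best approximation by a convex piecewise-linear function:
# affine part (free coefficients) plus hinge functions with nonnegative
# weights, which forces convexity. Target and knots are illustrative.
m, k = 200, 30
t = np.linspace(0.0, 1.0, m)
f = np.sin(3.0 * t)                          # target; not convex

knots = np.linspace(0.0, 1.0, k + 2)[1:-1]   # interior hinge knots
A = np.column_stack([np.ones(m), t] +
                    [np.maximum(t - tj, 0.0) for tj in knots])

lb = np.r_[-np.inf, -np.inf, np.zeros(k)]    # affine free, hinges >= 0
ub = np.full(2 + k, np.inf)
res = lsq_linear(A, f, bounds=(lb, ub))      # constrained least squares

best_convex = A @ res.x
print("discrete L2 error:", np.sqrt(np.mean((best_convex - f) ** 2)))
```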

    Multi-Grade Deep Learning for Partial Differential Equations with Applications to the Burgers Equation

    We develop in this paper a multi-grade deep learning method for solving nonlinear partial differential equations (PDEs). Deep neural networks (DNNs) have achieved superior performance in solving PDEs, in addition to their outstanding success in areas such as natural language processing, computer vision, and robotics. However, training a very deep network is often a challenging task. As the number of layers of a DNN increases, solving the large-scale non-convex optimization problem that yields the DNN solution of a PDE becomes more and more difficult, which may lead to a decrease rather than an increase in predictive accuracy. To overcome this challenge, we propose a two-stage multi-grade deep learning (TS-MGDL) method that breaks down the task of learning a DNN into several neural networks stacked on top of each other in a staircase-like manner. This approach allows us to mitigate the complexity of solving a non-convex optimization problem with a large number of parameters and to learn efficiently the residual components left over from previous grades. We prove that each grade/stage of the proposed TS-MGDL method can reduce the value of the loss function, and we further validate this fact through numerical experiments. Although the proposed method is applicable to general PDEs, the implementation in this paper focuses only on the 1D, 2D, and 3D viscous Burgers equations. Experimental results show that the proposed two-stage multi-grade deep learning method enables efficient learning of solutions of the equations and outperforms existing single-grade deep learning methods in predictive accuracy. Specifically, the predictive errors of single-grade deep learning are 26-60, 4-31, and 3-12 times larger than those of the TS-MGDL method for the 1D, 2D, and 3D equations, respectively.
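
    The grade-by-grade residual learning can be sketched compactly. The following toy example is an assumption-laden simplification, not the authors' TS-MGDL code: each grade fits the residual left by the frozen previous grades, with a 1D regression target standing in for the PDE loss on the Burgers equation; PyTorch and all hyperparameters are illustrative choices.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.linspace(-1.0, 1.0, 256).unsqueeze(1)
y = torch.sin(4.0 * torch.pi * x)            # stand-in target

def make_grade():
    return nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

grades, residual = [], y.clone()
for g in range(3):                           # three grades/stages
    net = make_grade()
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(2000):
        opt.zero_grad()
        loss = ((net(x) - residual) ** 2).mean()  # fit current residual
        loss.backward()
        opt.step()
    for p in net.parameters():               # freeze the finished grade
        p.requires_grad_(False)
    residual = residual - net(x).detach()    # leftover for next grade
    grades.append(net)
    print(f"grade {g + 1}: remaining MSE = {(residual ** 2).mean():.3e}")

y_hat = sum(net(x) for net in grades)        # stacked final prediction
```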

    Superconvergence of the Iterated Galerkin Methods for Hammerstein Equations

    In this paper, the well-known iterated Galerkin method and the iterated Galerkin-Kantorovich regularization method for approximating the solution of Fredholm integral equations of the second kind are generalized to Hammerstein equations with smooth and weakly singular kernels. The order of convergence of the Galerkin method and the orders of superconvergence of the iterated methods are analyzed. Numerical examples are presented to illustrate the superconvergence of the iterated Galerkin approximation for Hammerstein equations with weakly singular kernels. © 1996, Society for Industrial and Applied Mathematics.
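
    The iterated (Sloan-type) step admits a small worked example. The sketch below uses entirely fabricated data, not the paper's scheme: a Hammerstein equation u(t) = f(t) + ∫₀¹ k(t,s) u(s)² ds with k(t,s) = t·s and f(t) = 0.75·t, so the exact solution is u(t) = t. Piecewise-constant Galerkin with midpoint-rule inner products collapses to a small nonlinear system in the coefficients, after which the iterate substitutes the Galerkin solution back into the integral operator.

```python
import numpy as np

# Toy Hammerstein equation u(t) = 0.75*t + \int_0^1 t*s*u(s)^2 ds,
# exact solution u(t) = t. Fabricated data for illustration only.
n = 16
h = 1.0 / n
mid = (np.arange(n) + 0.5) * h               # midpoints of subintervals

K = np.outer(mid, mid)                       # kernel k(t,s)=t*s at nodes
f = 0.75 * mid

c = f.copy()                                 # solve the nonlinear system
for _ in range(100):                         # by fixed-point iteration
    c = f + h * K @ c**2

def u_iterated(t):
    # Iterated Galerkin step: feed the Galerkin solution back through
    # the integral operator; evaluable at any t, superconvergent.
    return 0.75 * t + t * h * np.sum(mid * c**2)

print("Galerkin node error :", np.abs(c - mid).max())
print("iterated error @0.5 :", abs(u_iterated(0.5) - 0.5))
```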

    Gauss-Type Quadratures for Weakly Singular Integrals and Their Application to Fredholm Integral Equations of the Second Kind

    In this paper we establish Gauss-type quadrature formulas for weakly singular integrals. The quadrature scheme is then applied to obtain numerical solutions of weakly singular Fredholm integral equations of the second kind. We call this method a discrete product-integration method, since the weights involved in the standard product-integration method are computed numerically.
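
    One standard way to build a Gauss-type rule for a weakly singular integral is to fold the singularity into a Jacobi weight. The sketch below assumes the model integral ∫₀¹ s^(-1/2) g(s) ds, which is an illustrative choice and not necessarily the kernel treated in the paper: mapping [0,1] to [-1,1] turns s^(-1/2) into the Jacobi weight (1+x)^(-1/2) (α = 0, β = -1/2), so only the smooth factor g is evaluated at the nodes.

```python
import numpy as np
from scipy.special import roots_jacobi

# Gauss-Jacobi rule for I(g) = \int_0^1 s^(-1/2) g(s) ds. With
# s = (1+x)/2 we get ds = dx/2 and s^(-1/2) = sqrt(2)*(1+x)^(-1/2),
# hence the overall factor 1/sqrt(2) below.
def singular_quad(g, n):
    x, w = roots_jacobi(n, 0.0, -0.5)        # nodes/weights on [-1, 1]
    s = 0.5 * (x + 1.0)                      # map back to [0, 1]
    return (w @ g(s)) / np.sqrt(2.0)

# Exact for polynomial g up to degree 2n - 1:
print(singular_quad(lambda s: np.ones_like(s), 2))  # exact value: 2
print(singular_quad(lambda s: s, 2))                # exact value: 2/3
```

    In a Nystrom-type discretization of a weakly singular second-kind equation, rules of this form supply numerically computed weights in place of the analytically derived product-integration weights.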